A combined penalty function and outer-approximation method for MINLP optimization

Authors

  • J. Viswanathan
  • Ignacio E. Grossmann

Abstract

An improved outer-approximation algorithm for MINLP optimization is proposed in this paper, aimed at the solution of problems where convexity conditions may not hold. The proposed algorithm starts by solving the NLP relaxation. If an integer solution is not found, a sequence of iterations consisting of NLP subproblems and MILP master problems is solved. The proposed MILP master problem is based on the outer-approximation/equality-relaxation algorithm and features an exact penalty function that allows violations of the linearizations of nonconvex constraints. The search proceeds until no improvement is found in the NLP subproblems. Computational experience is presented on a set of 18 test problems. The results show that the proposed method has a high degree of reliability for finding the global optimum in nonconvex problems.

Introduction

There has recently been increased interest in the development and application of nonlinear optimization algorithms that can handle both continuous and integer variables, especially of the 0-1 type. These problems, commonly referred to as mixed-integer nonlinear programming (MINLP) problems, have many applications in engineering design, planning, scheduling and marketing. Often the corresponding MINLP models exhibit special structures (e.g. graphs, networks, separable functions) that can be effectively exploited for developing specialized solution procedures. However, it is also very often the case, particularly in engineering design, that nonlinearities in the continuous variables do not exhibit any special form, since they result from complex engineering models. Thus, there is clearly a strong motivation to develop MINLP algorithms that are not overly restrictive in their assumptions about the form and properties of the functions involved.

Among the general-purpose algorithms for MINLP, we can cite branch and bound (Beale, 1977; Gupta, 1980), Generalized Benders Decomposition, GBD (Benders, 1962; Geoffrion, 1972), the Outer-Approximation/Equality-Relaxation method, OA/ER (Duran and Grossmann, 1986; Kocis and Grossmann, 1987), and the Feasibility Technique (Murtagh and Mawengkang, 1986; Mawengkang, 1988). The branch and bound method has the drawback that it can require the solution of a large number of NLP subproblems in the search tree, unless the NLP relaxation is very tight. GBD has the advantage that one can more readily exploit special structures in the NLP subproblems, but has the drawback that it may require a significant number of major iterations in which NLP subproblems and MILP master problems must be solved successively. The OA/ER algorithm has the advantage that it typically requires only a few major iterations, but has the drawback that the size of its MILP master problem is considerably larger than in GBD. Finally, the Feasibility Technique requires the least computational expense, since it is based on the idea of finding a feasible integer point that has the smallest local degradation with respect to the relaxed NLP solution; however, it does not guarantee optimality. Other related procedures for MINLP have been reported by Yuan et al. (1987), who extended the OA algorithm to problems with convex nonlinear 0-1 variables, and by Floudas et al. (1988), who applied partitioning of variables in GBD to induce convex NLP subproblems. The branch and bound, GBD and OA/ER algorithms require that some form of convexity assumption be satisfied in order to guarantee that they can find the global optimum of the MINLP.
On the other hand, the OA/ER algorithm, which tends to be the most efficient method when the NLP subproblems are expensive or difficult to solve, is the most stringent in terms of convexity requirements. In particular, the OA/ER algorithm relies on assumptions of convexity of the functions f and g, and of quasi-convexity (resp. quasi-concavity) of the nonlinear equality constraints h (Kocis and Grossmann, 1987). When these conditions are met, the algorithm will determine the global optimum. Otherwise, the linearizations in the master problem can cut into the feasible region of candidate integer points, which may result in sub-optimal solutions (Kocis and Grossmann, 1988a). To overcome this problem, a two-phase strategy was proposed by Kocis and Grossmann (1988a), in which the OA/ER algorithm is applied in the first phase; in the second phase, linearizations of nonconvex functions are identified by local and global tests so as to relax the master problem. This scheme proved successful in locating the global optimum in about 80% of the test problems. The implementation of the local and global tests is, however, somewhat difficult, and the tests are not guaranteed to identify all the nonconvexities.

Motivated by our experience in solving MINLP problems, the purpose of this paper is to develop a new variant of the OA/ER algorithm that does not require the explicit identification of nonconvexities. As will be shown, this can be accomplished with a new MILP master problem that incorporates an augmented penalty function for the violation of the linearizations of the nonlinear functions. Furthermore, the proposed algorithm (AP/OA/ER) has the important feature of not requiring the specification of an initial set of 0-1 variables, since the algorithm starts with the solution of the relaxed NLP problem. Also, if appropriate convexity conditions hold, the AP/OA/ER algorithm has the OA/ER algorithm embedded within it. Numerical results are reported for a set of 18 test problems arising in engineering design. Although convergence to the global optimum cannot be guaranteed, the numerical results suggest that the proposed algorithm is not only computationally efficient, but also very robust in finding the global optimum solution.

Outline of the AP/OA/ER Algorithm

We consider here the MINLP (mixed-integer nonlinear program) of the form:

$$
\begin{aligned}
\min \; & z = c^T y + f(x) \\
\text{s.t.} \; & Ay + h(x) = 0 \\
& By + g(x) \le 0 \\
& Cy + Dx \le 0 \\
& x \in X = \{ x \in \mathbb{R}^n : x^L \le x \le x^U \} \\
& y \in Y = \{0, 1\}^m
\end{aligned}
\tag{P}
$$

Here x denotes the vector of continuous variables and y denotes the vector of binary variables corresponding to logical decisions (e.g. the existence of units). The functions f, g and h are defined over appropriate domains and have continuous partial derivatives. The matrices A, B, C and D have compatible dimensions. For each fixed binary vector y, we assume that the corresponding NLP (nonlinear program)

$$
\begin{aligned}
\min \; & z = c^T y + f(x) \\
\text{s.t.} \; & Ay + h(x) = 0 \\
& By + g(x) \le 0 \\
& Cy + Dx \le 0 \\
& x \in X = \{ x \in \mathbb{R}^n : x^L \le x \le x^U \}
\end{aligned}
\tag{P(y)}
$$

satisfies one of the constraint qualifications (Mangasarian, 1969), so that its solution vector is a KKT (Karush-Kuhn-Tucker) point. The algorithm that we propose involves the following steps:

1. Solve the NLP relaxation of (P) with $y \in Y_r = \{ y \in \mathbb{R}^m : 0 \le y \le e \}$, where $e$ is the unity vector, to obtain the KKT point $(x^0, y^0)$. If $y^0$ is integer, stop. Otherwise, go to step 2.
2. Find an integer point $y^1$ with an MILP master problem that features an augmented penalty function, minimizing over the convex hull determined by the half-spaces at the KKT point $(x^0, y^0)$.
3. Solve the NLP [P(y)] at $y^1$ to obtain the KKT point $(x^1, y^1)$.
4. Find an integer point $y^2$ with the MILP master problem that corresponds to the minimization over the intersection of the convex hulls determined by the half-spaces of the KKT points at $y^0$ and $y^1$.
5. Repeat steps 3 and 4 until there is an increase in the value of the NLP objective function. (Repeating step 4 means augmenting the set over which the minimization is performed with additional linearizations, i.e. half-spaces, at the new KKT point.)

The above algorithm is in the spirit of the earlier algorithms proposed by Duran and Grossmann (1986) and Kocis and Grossmann (1987), but there are some important differences. In both of the previously cited algorithms it was assumed that an initial integer point $y^0$ was supplied, so that steps 1 and 2 were absent. Also, the termination criterion used in those algorithms, viz.:

5′. Repeat steps 3 and 4 until the objective function of the MILP is greater than or equal to the lowest value of the objective function among the previously solved NLP minima at fixed values of the integer vector y.

is different from the one proposed here. While the OA/ER algorithm has proved to be quite successful in solving a variety of problems (Kocis and Grossmann, 1989), its major limitation has been that it relies on assumptions of convexity of the functions, as discussed previously. For the algorithm proposed here, no assumptions concerning convexity of the functions in the MINLP are made. The main idea relies on the definition of a new MILP master problem that uses a linear approximation to an exact penalty function (Zhang, Kim and Lasdon, 1985), and therefore allows violations in the linearizations of the nonlinear functions; a sketch of such a master problem is given below.
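To make the role of the penalty concrete, the following is a minimal sketch of an augmented-penalty master problem of this kind, written only from the description above rather than copied from the paper; the slacks $p_k, q_k \ge 0$, the weights $\rho_k$ and the index $k = 1, \dots, K$ over the KKT points linearized so far are notational assumptions:

$$
\begin{aligned}
\min_{x,\,y,\,\mu,\,p,\,q} \; & c^T y + \mu + \sum_{k=1}^{K} \rho_k \, (p_k + q_k) \\
\text{s.t.} \; & \mu \ge f(x^k) + \nabla f(x^k)^T (x - x^k) - p_k, && k = 1, \dots, K, \\
& By + g(x^k) + \nabla g(x^k)^T (x - x^k) \le q_k \, e, && k = 1, \dots, K, \\
& Cy + Dx \le 0, \quad x \in X, \quad y \in \{0, 1\}^m, \quad p_k, q_k \ge 0 .
\end{aligned}
$$

The nonlinear equalities $h(x) = 0$ would be treated by the equality-relaxation step, i.e. linearized as inequalities whose directions are fixed by the signs of their KKT multipliers, and given slacks in the same way. Because a violated linearization now incurs a finite penalty $\rho_k p_k$ instead of rendering the point infeasible, linearizations of nonconvex functions can no longer cut off feasible integer points outright.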
The algorithm is also based on extensive computational experience, which has confirmed the desirability of starting with the solution of the relaxed NLP and of using termination criterion 5 instead of 5′. The algorithm embeds the OA/ER algorithm in the case where the convexity assumptions of the latter are fulfilled. Although the proposed method has no theoretical guarantee of finding the global optimum, it was able to locate global optima in virtually all test problems, despite the presence of nonconvexities in the MINLP problem. Our experience includes solving some challenging problems in distillation column design. The following sections describe the three major items of the proposed algorithm: the starting point, the MILP master problem and the termination criterion. The implementation of the algorithm is discussed and numerical results are also presented.

Starting point

Both Generalized Benders Decomposition (Geoffrion, 1972) and the OA/ER algorithm (Duran and Grossmann, 1986; Kocis and Grossmann, 1987) assume that an initial integer value y is supplied. On the other hand, the branch and bound method (Gupta, 1980) and the feasibility technique of Mawengkang (1988) start the calculations by solving the relaxed MINLP problem. This means that:

1. The user need not provide an initial integer vector.
2. If the relaxed MINLP provides an integer solution, further calculations are not necessary.

It is also reasonable to expect that the solution of the relaxed MINLP will provide very good estimates of the continuous variables and, hence, that the linear approximation to the MINLP at this point is often of good quality. Thus, we begin the computations by solving the relaxed MINLP:

$$
\begin{aligned}
\min \; & z = c^T y + f(x) \\
\text{s.t.} \; & Ay + h(x) = 0 \\
& By + g(x) \le 0 \\
& Cy + Dx \le 0 \\
& x \in X = \{ x \in \mathbb{R}^n : x^L \le x \le x^U \} \\
& y \in Y_r = \{ y \in \mathbb{R}^m : 0 \le y \le e \}
\end{aligned}
\tag{1}
$$

The solution to this problem may be obtained with any NLP solver, such as MINOS, SQP codes, etc. Let the solution be $(x^0, y^0)$. If $y^0$ is integer, we stop; otherwise, we proceed with the search for an integer solution. Note that if problem (1) is infeasible or unbounded, the same is true of the original problem (P). As may be expected, the relaxed MINLP generally takes longer to solve than an NLP with a fixed binary vector. Also, it should be noted that the NLP solution of (1) is only guaranteed to correspond to a global optimum if appropriate convexity conditions are satisfied (see Bazaraa and Shetty, 1979).
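To illustrate how the starting point, the penalized master problem and the termination criterion fit together, the following self-contained Python sketch runs the AP/OA/ER iteration on a toy one-dimensional MINLP of our own construction (not one of the paper's 18 test problems). Here scipy.optimize.minimize stands in for the NLP solver and scipy.optimize.milp for the MILP master solver; the penalty weight RHO, the helpers solve_nlp and solve_master, and the use of a single objective linearization per KKT point are all illustrative assumptions.

```python
# Sketch of the AP/OA/ER iteration on a toy MINLP (illustrative example):
#
#     min  z = y + (x - 0.6)^2
#     s.t. x - y <= 0,   x in [0, 1],   y in {0, 1}

import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp, minimize

RHO = 1000.0                      # penalty weight on linearization violations
f = lambda x: (x - 0.6) ** 2      # nonlinear part of the objective, f(x)
df = lambda x: 2.0 * (x - 0.6)    # its gradient

def solve_nlp(y_fixed=None):
    """Solve P(y) for a fixed binary y, or the NLP relaxation if y_fixed is None."""
    if y_fixed is None:           # step 1: relax y to the interval [0, 1]
        res = minimize(lambda v: v[1] + f(v[0]), x0=[0.5, 0.5],
                       bounds=[(0, 1), (0, 1)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda v: v[1] - v[0]}])  # x - y <= 0
        return res.x[0], res.x[1], res.fun
    res = minimize(lambda v: y_fixed + f(v[0]), x0=[0.5], bounds=[(0, 1)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda v: y_fixed - v[0]}])
    return res.x[0], float(y_fixed), res.fun

def solve_master(points):
    """Augmented-penalty MILP master built from linearizations at the KKT points.

    Decision vector: [x, y, mu, p_1, ..., p_K], one slack p_k per objective
    linearization; the coupling constraint x - y <= 0 is already linear.
    """
    K = len(points)
    n = 3 + K
    c = np.zeros(n)
    c[1], c[2], c[3:] = 1.0, 1.0, RHO        # minimize y + mu + RHO * sum(p_k)
    rows, ub = [], []
    row = np.zeros(n); row[0], row[1] = 1.0, -1.0
    rows.append(row); ub.append(0.0)         # x - y <= 0
    for k, (xk, _) in enumerate(points):
        # mu >= f(xk) + df(xk) * (x - xk) - p_k   (slack p_k permits violation)
        row = np.zeros(n)
        row[0], row[2], row[3 + k] = df(xk), -1.0, -1.0
        rows.append(row); ub.append(df(xk) * xk - f(xk))
    cons = LinearConstraint(np.vstack(rows), -np.inf, ub)
    bounds = Bounds([0, 0, -np.inf] + [0.0] * K, [1, 1, np.inf] + [np.inf] * K)
    integrality = np.zeros(n); integrality[1] = 1   # only y is integer
    res = milp(c, constraints=cons, bounds=bounds, integrality=integrality)
    return res.x[0], int(round(res.x[1]))

x0, y0, z0 = solve_nlp()                     # step 1: relaxed NLP
print(f"relaxed NLP: x = {x0:.3f}, y = {y0:.3f}, z = {z0:.3f}")
if abs(y0 - round(y0)) < 1e-6:
    print("relaxed solution is already integer -- done")
else:
    points, best = [(x0, y0)], np.inf
    while True:
        _, yk = solve_master(points)         # steps 2 and 4: MILP master
        xk, _, zk = solve_nlp(yk)            # step 3: NLP at fixed y
        print(f"y = {yk}: NLP optimum z = {zk:.3f}")
        if zk >= best:                       # step 5: stop when no improvement
            break
        best, points = zk, points + [(xk, yk)]
    print(f"best solution found: z = {best:.3f}")
```

On this example the relaxed NLP returns a fractional point (about x = y = 0.1), so the master proposes an integer y, the NLP is re-solved at that fixed y, and the loop stops at the first iteration in which the NLP objective fails to improve, as in termination criterion 5.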


Similar resources

Continuous Discrete Variable Optimization of Structures Using Approximation Methods

Optimum design of structures is achieved while the design variables are continuous and discrete. To reduce the computational work involved in the optimization process, all the functions that are expensive to evaluate are approximated. To approximate these functions, a semi-quadratic function is employed. Only the diagonal terms of the Hessian matrix are used, and these elements are estimated fr...

Full text

PROSYN : an automated topology and parameter process synthesizer

This paper describes an improved, user-friendly version of the computer package PROSYN, a mixed-integer nonlinear programming (MINLP) process synthesizer. PROSYN is an implementation of the modeling and decomposition (M/D) strategy by Kocis and Grossmann (1989) and the outer-approximation and equality-relaxation (OA/ER) algorithm by Kocis and Grossmann (1987). The main characteristic of the new vers...

Full text

Optimal Pareto Parametric Analysis of Two Dimensional Steady-State Heat Conduction Problems by MLPG Method

Numerical solutions obtained by the Meshless Local Petrov-Galerkin (MLPG) method are presented for two dimensional steady-state heat conduction problems. The MLPG method is a truly meshless approach, and neither the nodal connectivity nor the background mesh is required for solving the initial-boundary-value problem. The penalty method is adopted to efficiently enforce the essential boundary co...

Full text

Structural synthesis using the MINLP optimization approach

This paper presents a structural synthesis using the Mixed-Integer Non-Linear Programming (MINLP) approach. MINLP is a combined discrete/continuous optimization technique, where discrete binary 0-1 variables are defined for optimization of discrete alternatives and continuous variables for optimization of parameters. The MINLP optimization of a structural synthesis is performed through thre...

Full text

Mixed-integer non-linear programming approach to structural optimization

The paper presents the Mixed-Integer Non-Linear Programming (MINLP) approach to structural optimization. MINLP is a combined discrete/continuous optimization technique, where discrete binary 0-1 variables are defined for optimization of discrete alternatives and continuous variables for optimization of parameters. The MINLP optimization is performed in three steps, i.e. the generation of a...

Full text

Spectral gradient methods for linearly constrained optimization

Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian appro...

Full text



Publication date: 2015